7 June 2019

Why all AI must be ethical AI

By Jo Swinson

Diagnosing rare genetic disorders is difficult. Because cases are few and far between, it is harder to train medical professionals in what to look for. This is precisely the kind of activity that artificial intelligence can make easier.

A new app called Face2Gene is giving doctors a second opinion on their diagnoses, using machine learning and neural networks. It looks for certain tell-tale facial features and presents doctors with a likely list of congenital and neurodevelopmental disorders. And as healthcare AI continues to develop, it will lead to more accurate and faster diagnoses, while freeing up doctors and nurses to spend more time with their patients, providing the care, advice and empathy that algorithms can’t.
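To make this concrete, here is a deliberately simplified sketch in Python. It is not Face2Gene’s model or data, both of which are proprietary; the facial-feature measurements, the condition names and the training set are all invented. It simply shows the shape of the idea: a classifier turns measurements into a ranked list of candidate conditions with confidence scores, which a clinician weighs as a second opinion rather than a verdict.

```python
# Illustrative toy only: invented features, labels and data, not Face2Gene's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
conditions = ["Condition A", "Condition B", "Condition C"]  # hypothetical labels

# Synthetic "training set": 300 patients, 5 facial-feature measurements each.
X_train = rng.normal(size=(300, 5))
y_train = rng.integers(0, len(conditions), size=300)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For a new patient, present a ranked second opinion rather than a single answer.
new_patient = rng.normal(size=(1, 5))
probabilities = model.predict_proba(new_patient)[0]
for i in np.argsort(probabilities)[::-1]:
    print(f"{conditions[i]}: {probabilities[i]:.0%}")
```

The point of the ranking is that the clinician stays in charge: the system narrows the search, and the human makes the call.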

Advances in AI have the potential to change our lives in ways we previously thought impossible. Whether it’s solving the climate emergency, improving healthcare and care for the elderly, or making transport more efficient, the technological revolution ahead of us should be, and can be, harnessed for good. But, as well as the awesome examples of artificial intelligence in action, we hear worrying reports of how this technology is being abused too. Facial recognition can lead to better medical outcomes, but it can also be used for surveillance.

Just last month, a man was fined £90 because he refused to show his face to police trialling new facial recognition software in east London, in a modern-day equivalent of Harry Willcock refusing to show his ID card. As a liberal, I empathise with that man’s attempt to protect himself from a blatant invasion of his privacy, carried out with no consent and no understanding of how that data would be used.

We are also at risk of imbuing this technology with divine-like abilities to predict the future, and of assuming that it will make better decisions than a lowly human being. The danger is that we view AI as a crystal ball that can look into the future. In reality, all it does is reflect the information we feed it. It is more like a mirror that we are holding up to society.

And society isn’t all that pretty, so we shouldn’t be surprised that the data we are using to train these systems entrench the very biases we are still trying to eradicate: sexism, racism, and all other forms of discrimination. Last year, Amazon had to stop using an algorithm in its recruitment process when it realised that, having learned from the company’s current male-heavy workforce, it was penalising women.
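A tiny, entirely synthetic example makes the point. In the sketch below (invented data, not Amazon’s system), past hiring decisions were skewed towards men; a model trained on those decisions learns a strongly negative weight on the “female” indicator, faithfully mirroring the historical bias rather than correcting it.

```python
# Illustrative toy only: invented data, not any real recruitment system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)              # genuinely job-relevant signal
is_female = rng.integers(0, 2, size=n)  # protected attribute

# Historical decisions favoured men regardless of skill, so the bias is baked into the labels.
hired = (skill + (1 - is_female) + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

# The learned weight on is_female comes out strongly negative: the model has
# reproduced the past bias, not discovered anything about merit.
print("coefficients (skill, is_female):", model.coef_[0])
```

Remove the bias from the training data, or constrain the model, and the mirror shows a different picture; leave it in, and the algorithm launders yesterday’s prejudice into tomorrow’s decisions.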

As certain as I am that these technologies can make our lives infinitely better, I am also certain that they will throw up fundamental questions about the values we want to live by. And as these technologies transcend national borders, there will be a clash between western, liberal values that centre on individual freedoms and cultures where the collective good is placed above individual rights.


Given how heavily China is investing in AI, that is a battle liberalism stands to lose unless we create a global alliance committed to the ethical application of AI, which protects privacy and other fundamental freedoms and rights. The UK should be at the heart of those efforts and we should be leading by example.

The UK can and must become a world leader in ethical AI. We are already off to a good start. A plethora of organisations are looking at how to ensure AI systems are transparent, accountable, fair and secure. And the conversations I have been having in recent months with world-leading experts and businesses, through the Lib Dem Technology Commission, show significant co-operation on tackling the challenges ahead.

The government has also recently set up its Centre for Data Ethics and Innovation. Though it’s a welcome step, the centre’s funding is woefully low and it’s too early to judge if it will have the powers it needs to steer the development and deployment of AI.

There is more we can do. We should start with a frank and informed public debate on how we develop and deploy new technologies. In Montreal, for example, there is an institute that organises public meet-ups to inform policy development on how AI can be ethical, safe and inclusive. That’s something the Centre for Data Ethics and Innovation in the UK should consider as a model.

Just because AI can change every aspect of our lives and how we interact with each other, with business and with government, that doesn’t mean it should. If, for example, someone’s freedom is at stake, do we want that decision to be made by a human being who can be held accountable, or by an algorithm?

That’s a debate we need to have, and the answer won’t be clear cut. I can definitely see the merit in an algorithm that can produce fast and accurate analysis of case law in a trial, but it’s a different matter if the algorithm is in charge of making the decision.

We should also appreciate just how fast technology is evolving and the pressures that places on our regulators. All our regulatory bodies need to have a solid understanding of where technology is headed and how it affects them. But we should also create the structures for these bodies to share intelligence and use consistent definitions. That will help close loopholes, and businesses will appreciate the clarity.

And finally, the government needs to set an example for the ethical use of AI across public services. Examples like the police’s use of facial recognition that I mentioned earlier will provoke an unhelpful public backlash and create mistrust. Government and public services need to be transparent about how they are already using AI and involve the public in determining how it should be used in the future.

The opportunities that AI offers are immense. We are so lucky to be living in this age of technological wonder that hopefully holds the answers to the biggest challenges humanity faces. And the UK is in prime position to capitalise on these developments. To succeed, we need to bring the public along with us and we need to join forces with other countries, like Canada and our allies in the European Union, to lead the charge for ethical AI that makes our lives better without sacrificing our liberal values of fairness, privacy and transparency.
